We present SERP, a framework for self-supervised learning on 3D point clouds. SERP consists of an encoder-decoder architecture that takes perturbed or corrupted point clouds as input and aims to reconstruct the original, uncorrupted point cloud. The encoder learns a high-level latent representation of the point cloud in a low-dimensional subspace, from which the original structure is recovered. In this work, we use both Transformer-based and PointNet-based autoencoders. The proposed framework also addresses some of the limitations of Transformer-based masked autoencoders, which are prone to leaking positional information and to uneven information density. We trained our models on the complete ShapeNet dataset and evaluated them on a downstream classification task. We show that the pretrained models achieve 0.5-1% higher classification accuracy than networks trained from scratch. Furthermore, we also propose VASP: Vector-Quantized Autoencoders for Self-supervised representation learning on Point clouds, which employ discrete representation learning for Transformer-based autoencoders.
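The perturb-and-reconstruct objective can be sketched as follows. The abstract does not specify the reconstruction loss or the perturbation; Chamfer distance is a common choice for point-cloud reconstruction and Gaussian jitter is one simple corruption, so both are illustrative assumptions here rather than the paper's exact setup.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two point sets of shape (N, 3) and (M, 3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def perturb(points, sigma=0.02, rng=None):
    """Corrupt a point cloud with Gaussian jitter (one simple perturbation)."""
    rng = rng or np.random.default_rng(0)
    return points + rng.normal(0.0, sigma, size=points.shape)
```

For an encoder-decoder pair, the training objective would then take the form `chamfer_distance(decoder(encoder(perturb(x))), x)`.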
Inference tasks such as answer sentence selection (AS2) or fact verification are typically solved by fine-tuning Transformer-based models as individual sentence-pair classifiers. Recent studies show that these tasks benefit from jointly modeling dependencies across multiple candidate sentences. In this paper, we first show that popular pretrained Transformers perform poorly when fine-tuned for multi-candidate inference tasks. We then propose a new pretraining objective that models paragraph-level semantics across multiple input sentences. Our evaluation on three AS2 datasets and one fact verification dataset demonstrates the advantage of our pretraining technique over traditional ones, both for Transformers used as joint models for multi-candidate inference tasks and for those used as cross-encoders in sentence-pair formulations of these tasks. Our code and pretrained models will be released at https://github.com/amazon-research/wqa-multi-sentence-inference.
Graph neural networks (GNNs) have been shown to possess strong representation power, which can be exploited for downstream prediction tasks on graph-structured data such as molecules and social networks. They typically learn representations by aggregating information from the K-hop neighborhoods of individual vertices or from the enumerated walks in the graph. Prior studies have demonstrated the effectiveness of incorporating weighting schemes into GNNs; so far, however, this has mostly been limited to K-hop neighborhood GNNs. In this paper, we aim to design an algorithm that incorporates a weighting scheme into walk-based GNNs and to analyze its effect. We propose a novel GNN model, called AWARE, that aggregates information about the walks in a graph using attention schemes. This yields an end-to-end supervised learning method for graph-level prediction tasks in the standard setting, where the input is the adjacency and vertex information of a graph and the output is a predicted label for the graph. We then perform theoretical, empirical, and interpretability analyses of AWARE. Our theoretical analysis in a simplified setting identifies conditions for provable success, demonstrating how the graph information is encoded in the representation and how the weighting scheme in AWARE affects the representation and learning performance. Our experiments demonstrate the strong performance of AWARE on graph-level prediction tasks in the standard setting, in the domains of molecular property prediction and social networks. Finally, our interpretation study illustrates that AWARE can successfully capture the important substructures of the input graph. The code is available at https://github.com/mehmetfdemirel/aware.
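A minimal sketch of the core idea above: enumerate walks in the graph and aggregate their embeddings with softmax attention weights. The walk embedding (mean of vertex features) and the single attention vector `att` are simplifying assumptions for illustration; AWARE's actual parameterization is given in the paper and repository.

```python
import numpy as np

def enumerate_walks(adj, length):
    """All walks with `length` steps in a graph given by its adjacency matrix."""
    n = adj.shape[0]
    walks = [[v] for v in range(n)]
    for _ in range(length):
        walks = [w + [u] for w in walks for u in range(n) if adj[w[-1], u]]
    return walks

def aware_style_readout(adj, feats, att):
    """Aggregate walk embeddings with softmax attention into one graph vector."""
    walks = enumerate_walks(adj, length=2)
    emb = np.stack([feats[w].mean(axis=0) for w in walks])  # walk embedding: mean of vertex features
    scores = emb @ att                                      # one attention score per walk
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                # softmax over walks
    return (weights[:, None] * emb).sum(axis=0)             # attention-weighted graph representation
```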
Large datasets in NLP suffer from noisy labels due to erroneous automatic and human annotation procedures. We study the problem of text classification with label noise and aim to capture this noise through an auxiliary noise model on top of the classifier. We first assign each training sample a probability of having a noisy label by fitting a beta mixture model to the losses from an early epoch of training. We then use this score to selectively guide the learning of the noise model and the classifier. Our empirical evaluation on two text classification tasks shows that our approach improves over the baseline accuracy and prevents over-fitting to the noise.
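The beta-mixture step can be sketched as follows: fit a two-component beta mixture to per-sample training losses (scaled to (0,1)) with EM, and read off the posterior of the high-loss component as the per-sample noise probability. The method-of-moments M-step and the median-split initialization are implementation choices of this sketch, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import beta

def _weighted_beta_mom(x, w):
    """Weighted method-of-moments estimate of beta parameters (a, b)."""
    w = w / w.sum()
    m = (w * x).sum()
    v = (w * (x - m) ** 2).sum()
    common = max(m * (1 - m) / v - 1.0, 1e-3)  # guard against degenerate variance
    return m * common, (1 - m) * common

def fit_beta_mixture(losses, n_iter=25):
    """Posterior probability that each sample belongs to the high-loss
    (noisy-label) component of a two-component beta mixture, fit by EM."""
    x = np.clip(losses, 1e-4, 1 - 1e-4)
    resp = (x > np.median(x)).astype(float)  # init: high-loss half = "noisy"
    for _ in range(n_iter):
        # M-step: mixing weight and beta parameters from the responsibilities
        pi1 = resp.mean()
        a0, b0 = _weighted_beta_mom(x, 1 - resp)
        a1, b1 = _weighted_beta_mom(x, resp)
        # E-step: posterior of the noisy component under the current fit
        p0 = (1 - pi1) * beta.pdf(x, a0, b0)
        p1 = pi1 * beta.pdf(x, a1, b1)
        resp = p1 / (p0 + p1 + 1e-12)
    return resp
```

The returned posterior can then gate how much each sample contributes to the noise model versus the classifier.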
Learning policies from fixed offline datasets is a key challenge in scaling up reinforcement learning (RL) algorithms towards practical applications. This is often because off-policy RL algorithms suffer from distributional shift, due to the mismatch between the dataset and the target policy, leading to high variance and over-estimation of value functions. In this work, we propose variance regularization for offline RL algorithms, using stationary distribution corrections. We show that by using Fenchel duality, we can avoid double-sampling issues when computing the gradient of the variance regularizer. The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithm. We show that the regularizer leads to a lower bound on the offline policy optimization objective, which helps avoid over-estimation errors, and we demonstrate the benefits of our approach across a range of continuous control domains compared to existing state-of-the-art algorithms.
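The double-sampling issue and its dual resolution can be sketched with a standard identity (shown here for intuition; the paper applies it together with stationary distribution corrections):

```latex
% Variance contains the square of an expectation; an unbiased gradient of
% (\mathbb{E}[X])^2 requires two independent samples (double sampling):
\mathrm{Var}[X] = \mathbb{E}[X^2] - (\mathbb{E}[X])^2
% Fenchel conjugate of the square: x^2 = \max_{\nu} (2\nu x - \nu^2), hence
(\mathbb{E}[X])^2 = \max_{\nu} \big( 2\nu\,\mathbb{E}[X] - \nu^2 \big)
% Substituting (the max becomes a min under the negation) gives
\mathrm{Var}[X] = \min_{\nu} \mathbb{E}\big[ (X - \nu)^2 \big]
```

The final form is a single expectation, jointly minimized over the scalar dual variable $\nu$ (whose optimum is $\mathbb{E}[X]$), so its gradient can be estimated from single samples.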
Research has shown that climate change creates warmer temperatures and drier conditions, leading to longer wildfire seasons and increased wildfire risks in the United States. These factors have in turn led to increases in the frequency, extent, and severity of wildfires in recent years. Given the danger posed by wildland fires to people, property, wildlife, and the environment, there is an urgency to provide tools for effective wildfire management. Early detection of wildfires is essential to minimizing potentially catastrophic destruction. In this paper, we present our work on integrating multiple data sources in SmokeyNet, a deep learning model using spatio-temporal information to detect smoke from wildland fires. Camera image data is integrated with weather sensor measurements and processed by SmokeyNet to create a multimodal wildland fire smoke detection system. We present our results comparing performance in terms of both accuracy and time-to-detection for multimodal data vs. a single data source. With a time-to-detection of only a few minutes, SmokeyNet can serve as an automated early notification system, providing a useful tool in the fight against destructive wildfires.
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In particular, there are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers. Recent work has begun to introduce such benchmark datasets for several tasks. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. We follow the blueprint of the Spoken Language Understanding Evaluation (SLUE) benchmark suite. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will be publishing for each task (i) annotations for a relatively small fine-tuning set, (ii) annotated development and test sets, and (iii) baseline models for easy reproducibility and comparisons. In this work, we present the details of data collection and annotation and the performance of the baseline models. We also perform sensitivity analysis of pipeline models' performance (speech recognizer + text model) to the speech recognition accuracy, using more than 20 state-of-the-art speech recognition models.
In the process of materials discovery, chemists currently need to perform many laborious, time-consuming, and often dangerous lab experiments. To accelerate this process, we propose a framework for robots to assist chemists by performing lab experiments autonomously. The solution allows a general-purpose robot to perform diverse chemistry experiments and efficiently make use of available lab tools. Our system can load high-level descriptions of chemistry experiments, perceive a dynamic workspace, and autonomously plan the required actions and motions to perform the given chemistry experiments with common tools found in the existing lab environment. Our architecture uses a modified PDDLStream solver for integrated task and constrained motion planning, which generates plans and motions that are guaranteed to be safe by preventing collisions and spillage. We present a modular framework that can scale to many different experiments, actions, and lab tools. In this work, we demonstrate the utility of our framework on three pouring skills and two foundational chemical experiments for materials synthesis: solubility and recrystallization. More experiments and updated evaluations can be found at https://ac-rad.github.io/arc-icra2023.
This paper proposes an easy-to-compute upper bound for the overlap index between two probability distributions without requiring any knowledge of the distribution models. The computation of our bound is time-efficient and memory-efficient and only requires finite samples. The proposed bound shows its value in one-class classification and domain shift analysis. Specifically, in one-class classification, we build a novel one-class classifier by converting the bound into a confidence score function. Unlike most one-class classifiers, the training process is not needed for our classifier. Additionally, the experimental results show that our classifier can be accurate with only a small number of in-class samples and outperforms many state-of-the-art methods on various datasets in different one-class classification scenarios. In domain shift analysis, we propose a theorem based on our bound. The theorem is useful in detecting the existence of domain shift and inferring data information. The detection and inference processes are both computation-efficient and memory-efficient. Our work shows significant promise toward broadening the applications of overlap-based metrics.
We propose a framework in which multiple entities collaborate to build a machine learning model while preserving privacy of their data. The approach utilizes feature embeddings from shared/per-entity feature extractors transforming data into a feature space for cooperation between entities. We propose two specific methods and compare them with a baseline method. In Shared Feature Extractor (SFE) Learning, the entities use a shared feature extractor to compute feature embeddings of samples. In Locally Trained Feature Extractor (LTFE) Learning, each entity uses a separate feature extractor and models are trained using concatenated features from all entities. As a baseline, in Cooperatively Trained Feature Extractor (CTFE) Learning, the entities train models by sharing raw data. Secure multi-party algorithms are utilized to train models without revealing data or features in plain text. We investigate the trade-offs among SFE, LTFE, and CTFE in regard to performance, privacy leakage (using an off-the-shelf membership inference attack), and computational cost. LTFE provides the most privacy, followed by SFE, and then CTFE. Computational cost is lowest for SFE and the relative speed of CTFE and LTFE depends on network architecture. CTFE and LTFE provide the best accuracy. We use MNIST, a synthetic dataset, and a credit card fraud detection dataset for evaluations.
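The difference between the settings can be sketched at the feature level. This is shown in the clear purely for illustration; the paper performs these steps under secure multi-party computation, so raw data and features are never revealed, and the extractor functions here are hypothetical placeholders.

```python
import numpy as np

def sfe_features(sample_parts, shared_extractor):
    """SFE: every entity embeds its share of a sample with one shared extractor;
    the joint model is trained on the concatenated embeddings."""
    return np.concatenate([shared_extractor(x) for x in sample_parts])

def ltfe_features(sample_parts, extractors):
    """LTFE: each entity uses its own locally trained extractor; the joint model
    is trained on the concatenated per-entity embeddings."""
    return np.concatenate([f(x) for f, x in zip(extractors, sample_parts)])
```

In the CTFE baseline, by contrast, the entities effectively pool raw data, so no feature-extraction boundary separates them.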